Markov dataset
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.77)
Generalization Bounds for Equivariant Networks on Markov Data
Li, Hui, Wang, Zhiguo, Chen, Bohui, Sheng, Li
Equivariant neural networks play a pivotal role in analyzing datasets with symmetry properties, particularly in complex data structures. However, integrating equivariance with Markov properties presents notable challenges due to the inherent dependencies within such data. Previous research has primarily concentrated on establishing generalization bounds under the assumption of independently and identically distributed data, frequently neglecting the influence of Markov dependencies. In this study, we investigate the impact of Markov properties on generalization performance alongside the role of equivariance within this context. We begin by applying a new McDiarmid's inequality to derive a generalization bound for neural networks trained on Markov datasets, using Rademacher complexity as a central measure of model capacity. Subsequently, we utilize group theory to compute the covering number under equivariant constraints, enabling us to obtain an upper bound on the Rademacher complexity based on this covering number. This bound provides practical insights into selecting low-dimensional irreducible representations, enhancing generalization performance for fixed-width equivariant neural networks.
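All three abstracts on this page lean on Rademacher complexity as the capacity measure. As a concrete reference point, here is a minimal Monte Carlo sketch of the empirical Rademacher complexity for a toy norm-bounded linear class; the class, the data, and the bound B are illustrative assumptions, not the equivariant networks or covering-number machinery analyzed in the paper.

```python
# Monte Carlo estimate of the empirical Rademacher complexity of the
# norm-bounded linear class F = {x -> <w, x> : ||w||_2 <= B}.
# For this class the supremum has a closed form:
#   sup_{||w|| <= B} (1/n) sum_i sigma_i <w, x_i> = (B/n) ||sum_i sigma_i x_i||_2,
# so the estimate only averages that norm over random sign vectors.
# The data and the bound B are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(X, B=1.0, n_draws=2000):
    n = X.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))  # Rademacher sign vectors
    sums = sigma @ X                                     # rows: sum_i sigma_i x_i
    return B / n * np.linalg.norm(sums, axis=1).mean()

X = rng.normal(size=(500, 10))                           # toy i.i.d. sample
print(f"R_hat_n(F) ~ {empirical_rademacher(X):.4f}")
```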
Generalization Error Bounds on Deep Learning with Markov Datasets
In this paper, we derive upper bounds on generalization errors for deep neural networks with Markov datasets. These bounds are developed based on Koltchinskii and Panchenko's approach for bounding the generalization error of combined classifiers with i.i.d. data. The development of new symmetrization inequalities in high-dimensional probability for Markov chains is a key element in our extension, where the spectral gap of the infinitesimal generator of the Markov chain serves as a key parameter in these inequalities. We also propose a simple method to convert these bounds, and other similar ones in traditional deep learning and machine learning, to Bayesian counterparts for both i.i.d. and Markov datasets. Extensions to $m$-order homogeneous Markov chains, such as AR and ARMA models, and to mixtures of several Markov data sources are given.
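The spectral gap mentioned above is the quantity that controls how quickly Markov dependencies decorrelate, and hence how much the i.i.d. bounds loosen. A minimal sketch of a discrete-chain analogue, assuming a toy two-state transition matrix (the paper itself works with the infinitesimal generator of the chain):

```python
# A minimal sketch of the spectral-gap quantity the bounds depend on,
# computed for a toy finite-state Markov chain. The transition matrix P
# is an illustrative assumption. For a discrete chain a common proxy is
# 1 - |lambda_2(P)|, the gap below the leading eigenvalue 1.
import numpy as np

P = np.array([[0.90, 0.10],
              [0.05, 0.95]])            # row-stochastic transition matrix

eigvals = sorted(np.linalg.eigvals(P), key=lambda z: -abs(z))
gap = 1.0 - abs(eigvals[1])             # 1 minus the second-largest modulus
print(f"spectral gap = {gap:.3f}")      # smaller gap -> slower mixing, looser bounds
```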
On Rademacher Complexity-based Generalization Bounds for Deep Learning
In this paper, we develop novel bounds for the Rademacher complexity and the generalization error in deep learning with i.i.d. and Markov datasets. The new Rademacher complexity and generalization bounds are tight up to $O(1/\sqrt{n})$, where $n$ is the size of the training set. They can decay exponentially in the depth $L$ for some neural network structures. The development of Talagrand's contraction lemmas for high-dimensional mappings between function spaces and deep neural networks with general activation functions is a key technical contribution of this work.
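A rough way to see the $O(1/\sqrt{n})$ rate numerically: the Monte Carlo estimator from the first sketch, evaluated at growing sample sizes, should shrink like $n^{-1/2}$, so $\hat{R}_n \sqrt{n}$ stays roughly constant. The norm-bounded linear class is again an illustrative assumption, not the deep networks the bound covers.

```python
# Numerical check of the ~1/sqrt(n) decay of the empirical Rademacher
# complexity for the same toy norm-bounded linear class as above.
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(X, B=1.0, n_draws=2000):
    n = X.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    return B / n * np.linalg.norm(sigma @ X, axis=1).mean()

for n in (100, 400, 1600, 6400):
    X = rng.normal(size=(n, 10))
    r = empirical_rademacher(X)
    # r * sqrt(n) should hover near a constant (about sqrt(d) here)
    print(f"n = {n:5d}   R_hat = {r:.4f}   R_hat * sqrt(n) = {r * np.sqrt(n):.3f}")
```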